Enable pack hpu and from_quantized #7

Draft · wants to merge 3 commits into base: main

Conversation


@HolyFalafel commented Jul 31, 2024

Revert "Removed hpu pack until we'll implement it in HPU"
This reverts commit 92a8d41.

  • Enabled test_quantization and AutoGPTQForCausalLM.from_quantized (see the usage sketch below)
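
For context, a minimal usage sketch of AutoGPTQ's from_quantized entry point. The checkpoint path is hypothetical, and device="hpu" assumes this PR's HPU enablement (upstream builds typically target "cuda:0"); this is illustrative, not the PR's test code:

```python
# Minimal sketch, assuming the auto-gptq and transformers packages are installed.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "llama-7b-4bit-gptq"  # hypothetical path to a quantized checkpoint
device = "hpu"                    # assumes this PR's HPU support

tokenizer = AutoTokenizer.from_pretrained(model_dir)
# from_quantized loads the packed int4 weights written at quantization time
model = AutoGPTQForCausalLM.from_quantized(model_dir, device=device)

inputs = tokenizer("Hello, world", return_tensors="pt").to(device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```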

HolyFalafel and others added 3 commits July 30, 2024 11:11
* Supporting llama int4 quantization using AutoGPTQ

* Running only PT code (similar to cuda_old) on HPU

* Testing convert_from_int4

* Started cleanup

* code cleanup

* Added weight reshape in preprocessing and a llama7b generation HPU test

* Changed reshape to match matmul (still not accurate) and fixed q4 test

* Fixing zero points

* Update pack function (int4 packing is illustrated in the sketch after this commit list)

* Fixed accuracy

* Uncommented exllama

* Marlin test fix + added hpu bias test

* Review comments

* Removed hpu pack until we'll implement it in HPU

---------

Co-authored-by: yan tomsinsky <[email protected]>
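
For context on the pack function referenced in the commits above: GPTQ-style quantized linear layers store eight unsigned 4-bit weights per int32 word. The sketch below is an illustrative reimplementation of that layout in plain PyTorch; the names pack_int4/unpack_int4 are hypothetical, and this is not the PR's HPU kernel, which per the commits also reshapes weights to match the HPU matmul:

```python
import torch

def pack_int4(qweight: torch.Tensor) -> torch.Tensor:
    """Pack unsigned 4-bit values (int32 in [0, 15]) into int32 words,
    eight values per word along dim 0. Simplified illustrative sketch."""
    rows, cols = qweight.shape
    assert rows % 8 == 0, "rows must be a multiple of 8"
    packed = torch.zeros(rows // 8, cols, dtype=torch.int32)
    for i in range(8):
        # Nibble i of each word comes from every 8th row, offset by i
        packed |= (qweight[i::8] & 0xF) << (4 * i)
    return packed

def unpack_int4(packed: torch.Tensor) -> torch.Tensor:
    """Inverse of pack_int4: recover the 4-bit values as int32 in [0, 15]."""
    rows, cols = packed.shape
    out = torch.empty(rows * 8, cols, dtype=torch.int32)
    for i in range(8):
        # Masking with 0xF discards the sign extension from the arithmetic shift
        out[i::8] = (packed >> (4 * i)) & 0xF
    return out
```

Round-tripping unpack_int4(pack_int4(w)) recovers w exactly, which is the kind of invariant a packing test can assert.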
@HolyFalafel changed the title from "Dev/danny/enable pack hpu and test" to "Enable pack hpu and from_quantized" on Jul 31, 2024